Global Distortions from Local Rewards: Neural Coding Strategies in Path-Integrating Neural Systems

Neural Information Processing Systems

Grid cells in the mammalian brain are fundamental to spatial navigation, and therefore crucial to how animals perceive and interact with their environment. Traditionally, grid cells are thought to support path integration through highly symmetric hexagonal lattice firing patterns. However, recent findings show that their firing patterns become distorted in the presence of significant spatial landmarks such as rewarded locations. This introduces a novel perspective of dynamic, subjective, and action-relevant interactions between spatial representations and environmental cues. Here, we propose a practical and theoretical framework to quantify and explain these interactions.
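The symmetric hexagonal firing pattern the abstract refers to is commonly modeled as a sum of three plane waves oriented 60 degrees apart. The sketch below generates such an idealized, undistorted rate map; this is the standard textbook model, not the framework proposed in the paper, and the `spacing` and `phase` parameters are illustrative assumptions.

```python
import numpy as np

def grid_firing_map(size=64, spacing=0.3, phase=(0.0, 0.0)):
    """Idealized hexagonal grid-cell rate map: sum of three cosine
    plane waves 60 degrees apart, rectified and normalized."""
    xs = np.linspace(0, 1, size)
    X, Y = np.meshgrid(xs, xs)
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number for the desired lattice spacing
    rate = np.zeros_like(X)
    for angle in (0.0, np.pi / 3, 2 * np.pi / 3):
        kx, ky = k * np.cos(angle), k * np.sin(angle)
        rate += np.cos(kx * (X - phase[0]) + ky * (Y - phase[1]))
    rate = np.maximum(rate, 0.0)  # rectify: keep only positive lobes (firing fields)
    return rate / rate.max()      # normalize peak rate to 1

rmap = grid_firing_map()
print(rmap.shape)  # (64, 64)
```

Reward-induced distortions of the kind the paper studies would appear as local deviations of the firing fields from this perfect lattice.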


Rethinking the Membrane Dynamics and Optimization Objectives of Spiking Neural Networks

Neural Information Processing Systems

Although spiking neural networks (SNNs) have demonstrated notable energy efficiency across various fields, the limited firing patterns of spiking neurons within fixed time steps restrict the expression of information, which impedes further improvement of SNN performance. In addition, current implementations of SNNs typically take the firing rate or average membrane potential of the last layer as the output, leaving other possibilities unexplored. In this paper, we identify that the limited spike patterns of spiking neurons stem from the initial membrane potential (IMP), which is conventionally set to 0. By adjusting the IMP, spiking neurons can generate additional firing patterns and pattern mappings. Furthermore, we find that in static tasks, the accuracy of SNNs at each time step increases as the membrane potential evolves from zero. This observation inspires us to propose a learnable IMP, which accelerates the evolution of the membrane potential and enables higher performance within a limited number of time steps. Additionally, we introduce the last time step (LTS) approach to accelerate convergence in static tasks, and we propose a label-smoothed temporal efficient training (TET) loss to mitigate the conflict between the optimization objective and the regularization term in vanilla TET. Our methods improve accuracy by 4.05% on ImageNet over the baseline and achieve state-of-the-art performance of 87.80% on CIFAR10-DVS and 87.86% on N-Caltech101.
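The core observation, that a nonzero initial membrane potential unlocks firing patterns a zero-initialized neuron cannot produce, can be seen with a minimal discrete leaky integrate-and-fire neuron. This is a sketch of the idea only; the neuron model, time constant, and reset rule here are simplifying assumptions, not the paper's implementation.

```python
def lif_spikes(inputs, imp=0.0, tau=2.0, v_th=1.0):
    """Discrete leaky integrate-and-fire neuron with a configurable
    initial membrane potential (IMP) and hard reset after a spike."""
    v = imp
    spikes = []
    for x in inputs:
        v = v / tau + x          # leak, then integrate the input
        s = 1 if v >= v_th else 0
        spikes.append(s)
        if s:
            v = 0.0              # hard reset on spike
    return spikes

x = [0.6, 0.6, 0.6, 0.6]
print(lif_spikes(x, imp=0.0))  # [0, 0, 1, 0] -- one pattern with IMP = 0
print(lif_spikes(x, imp=0.8))  # [1, 0, 0, 1] -- a different pattern, same input
```

With identical inputs and only four time steps, the nonzero IMP yields a spike train that the zero-initialized neuron cannot emit, which is exactly the expressivity argument for making the IMP learnable.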


Model of human cognition

Yonggang, Wu

arXiv.org Artificial Intelligence

Recently, there has been immense development in the fields of artificial intelligence (AI) and computational neuroscience. Numerous architectures and models have been implemented in artificial systems to challenge human intelligence, especially with the release of increasingly proficient large language models (LLMs). However, despite advancements in LLMs, artificial systems still fall short of the human capacity for generalisation across diverse tasks and environments, so it is an overstatement to label the current generations of LLMs artificial general intelligence (AGI). We propose that to create artificial systems with high generalisation capabilities, one must first examine and understand the fundamentals of human cognition through conceptual models of the brain. This paper introduces a theoretical model of cognition that integrates biological plausibility and functionality, encapsulating the fundamental elements of cognition and accounting for many psychological and behavioural regularities. The model consists of four main modules: the visual processing module, the semantic module, the predictive module, and the executive module. The modules are discussed in chronological order, with each affiliated with corresponding anatomical regions of the brain. Thereafter, the model is substantiated with real-world examples that reflect its general problem-solving capabilities.






Mimicking associative learning of rats via a neuromorphic robot in open field maze using spatial cell models

Liu, Tianze, Siddique, Md Abu Bakr, An, Hongyu

arXiv.org Artificial Intelligence

Data-driven Artificial Intelligence (AI) approaches have exhibited remarkable prowess across various cognitive tasks using extensive training data. However, the reliance on large datasets and neural networks presents challenges such as high power consumption and limited adaptability, particularly in SWaP-constrained (size, weight, and power) applications like planetary exploration. To address these issues, we propose enhancing the autonomous capabilities of intelligent robots by emulating the associative learning observed in animals. Associative learning enables animals to adapt to their environment by memorizing concurrent events. By replicating this mechanism, neuromorphic robots can navigate dynamic environments autonomously, learning from interactions to optimize performance. This paper explores the emulation of associative learning in rodents using neuromorphic robots within open-field maze environments, leveraging insights from spatial cells such as place and grid cells. By integrating these models, we aim to enable online associative learning for spatial tasks in real-time scenarios, bridging the gap between biological spatial cognition and robotics for advancements in autonomous systems.
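The combination of place-cell models with associative learning that the abstract describes can be sketched in a few lines: Gaussian place-cell tuning plus a Hebbian weight update that links location-driven activity to a concurrent cue. The tuning widths, cell centers, and learning rate below are illustrative assumptions, not the authors' robot implementation.

```python
import numpy as np

def place_activity(pos, centers, sigma=0.2):
    """Textbook Gaussian place-cell tuning: each cell fires most
    strongly when the agent is near its preferred location."""
    d2 = np.sum((centers - np.asarray(pos)) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Hebbian association between place-cell activity and a concurrent cue
centers = np.array([[0.2, 0.2], [0.8, 0.8]])  # two cells in a unit arena
w = np.zeros(len(centers))
eta = 0.5  # learning rate (illustrative)
for pos, cue in [((0.2, 0.2), 1.0), ((0.8, 0.8), 0.0)]:
    w += eta * cue * place_activity(pos, centers)

# cells tuned to the cued location acquire the strongest association
assert w[0] > w[1]
```

Because the update is driven only by co-occurring events, the association forms online from interaction, which is the adaptability argument the abstract makes against dataset-heavy training.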

